Rumbaugh et al.: Object-Oriented Modeling and Design


(a book review)

When the battle is over, everyone is an excellent commander, as a Czech proverb says. Object-Oriented Modeling and Design was published almost five years ago (1991), and much of the OMT methodology has changed since then. Nevertheless, people still use it as a basic textbook of OO methodology. What of value can they still learn from this book, and what should they be critical of?

This review tries to point out what remains valuable in the book and what deserves criticism.

Topics follow:
Contents Overview
Commentary
Further evolution of OMT
Summary
Glossary
References
Particular Remarks

Remarkable quotations: what parts of the book are especially worth reading?

On the authors


All the authors were computer scientists at General Electric Corporate Research and Development, Schenectady, New York (US). Nothing more is known to me about Michael Blaha, William Premerlani and William Lorensen. James Rumbaugh recently became a fellow at Rational Software Corporation, 2800 San Tomas Expy., Santa Clara, CA. Frederick Eddy has worked at GE for 20 years. Both James Rumbaugh and Frederick Eddy are popular speakers on OMT in the US and Europe (see RSE 2/94).

Bibliographic record:

Rumbaugh, J.; Blaha, M.; Premerlani, W.; Eddy, F.; Lorensen, W.: Object-Oriented Modeling and Design, Prentice-Hall, Englewood Cliffs, NJ 1991

OMT: Contents Overview


The book consists of four main parts:

Modeling concepts
explains the concepts of the OO approach. The concepts are language independent and fundamental to the rest of the book. OMT builds three models, object, functional and dynamic (in accord with three aspects of the system): the static structure of objects and relationships, the computational structure of functions and values, and the control structure of events and states.
Methodology
describes the consecutive phases of program development: problem statement, analysis, system design and object design. All phases but the last one are language independent. Analysis determines a model of the problem to be solved. System design constructs the architecture of the system. Object design elaborates particular objects: it chooses algorithms, decides how associations are to be implemented and settles other issues specific to the particular implementation.
Implementation
gives guidelines for implementation in various environments: OO languages, non-OO languages and relational databases. Good programming style is discussed in terms of reusability, extensibility, robustness and programming-in-the-large.
Applications
contains three case studies developed by the authors at the General Electric Research and Development Center.

Every chapter also contains exercises.

The book ends with a glossary of terms, answers to selected exercises and an index.


OMT: Commentary



Purpose of system

System requirements express various aspects of a common purpose of the system. The system is analyzed and its parts are recognized only in the context of the system purpose. No subsystem, object class or object instance makes sense independently of the system purpose.

The purpose of the system is the first thing the analyst should be interested in. The purpose can be expressed in the problem statement, or it can be synthesized out of use cases (as in OOSE). The purpose of the system should be agreed with the requestor so that a correct common understanding is established. That is the first step on the way to developing a valid system that will satisfy the needs and expectations of the requestor.

OMT doesn't mention the purpose of the system; it has to be grasped intuitively. Correct understanding of the purpose is neglected, and subsystems, classes and object instances are not checked against the purpose.


Modeling the real world

It's probably true that some systems can be developed by modeling objects from the real world.

On the contrary, a power station, for example, can't be programmed this way (see below: there are problems that require the software engineer to also be an expert in them).

The question remains, though: what is the real world, what are real-world concepts, real objects etc.? I have asked this question several times, but I have received no answer yet. We could discuss vague terms as long as we wished, but we cannot base an engineering discipline on vague terms. As I still cannot take real-world modeling seriously, I hope a bit of philosophy might be helpful for understanding the problem. Let's try to grasp the distinction between objective and subjective, what objective reality means, what truth means and what the ways of cognition are. Let's start by discussing in more detail how a system can be specified...


How system can be specified

Let's remember: specification concerns what is specific (and in the special case of system specification, it concerns what is specific to the system being specified).

Specification does not concern anything internal to the system.

System specification is always a specification of the external behavior of the system, nothing else.

OMT doesn't approach fixing the system boundaries. It doesn't state criteria that make it possible to differentiate objects inside the system from objects outside it. If the system boundary remains unknown, the behavior of the system towards the external world cannot be determined. Consequently, the system specification cannot be obtained (i.e. requirements cannot be stated).

What good is a methodology of software engineering when the software engineer doesn't know what problem is to be solved? What kind of engineering solves unformulated problems? How can the software engineer find out whether the results of his work are valid (i.e. whether he has done what was to be done)?

All those problems can be solved in OMT, and some of them already have been. So this is not a crucial objection against how OMT specifies systems. Rather, the distinction between OMT or OOA on the one hand and OBA or OOSE on the other becomes apparent here. This distinction manifests the deep discord between two major approaches to the process of cognition: mysticism and behaviorism.


Mysticism

Mystics believe that truths are absolute and exist only objectively in the world. The truths are considered to be one aspect of some spiritual being (e.g. God). Truths are basically hidden, but they can be recognized. How? Unity with the spiritual being (and in this way with the truths themselves) must be reached first (e.g. by prayer or meditation). In this way the human being can participate in the truths, and the truths become revealed to her or him. She or he can even tell the recognized truths to other persons, but only to persons who are in spiritual unity with the truths. Whoever doesn't want to understand cannot understand.

So, truth can't be recognized unless unity with the spiritual being, and thus with the truths themselves, is reached first.


Behaviorism

American behaviorism (which can be considered a more recent stage in the development of empiricism, and of the Marxist gnoseology that is well known to me) admits that the absolute, hidden truths exist objectively. As regards cognition, however, only relative, partial and subjective truths can be learned by individuals at any moment. Cognition is a never-ending, stepwise process of verifying old knowledge, denying it and grasping a more and more truthful understanding of objective reality. Every particular piece of knowledge can be acquired by experiencing the behavior of objective reality. The question remains, though: how do pieces of knowledge constitute understanding?

It sometimes happens that newly acquired knowledge contradicts other knowledge. (Thus correct thinking is assumed, but the rules of correct thinking are not explained in detail.) Behaviorists base their judgment on experience: the truth should be verified practically and the knowledge corrected accordingly. In this way, understanding gets one step closer to the absolute truth.

So, truth is approved neither by revelation nor by mere thinking, but by practical experience.


Rationalism

Rationalists assume another cognitive process: thinking. They believe that correct thinking makes the hidden truths apparent to them, and they assume they know the rules of correct thinking. The question remains, though, whether the rules are correct. Conversely, logical incorrectness reveals either incorrectness of the recognized matter or incorrectness of some logical rule. So, verifying logical correctness carefully, precisely and systematically allows both the recognized matter and the rules of thinking themselves to be corrected.

Making judgments among various approaches to cognition is definitely not a matter of software engineering. Nor is observing people tossing about between the unconscious existence of mere matter and the conscious participation in absolute truths, however amusing it may be. None of the approaches should be ignored in software engineering: each contains a moral that benefits software engineering when applied properly.


Mysticism vs. behaviorism in software engineering

OOA, OMT and similar mystical methodologies consider that some absolute, invariant truths are revealed only to experts. They don't consider any cognitive process at all. They completely ignore what I (as a Marxist with a state examination) am scientifically convinced of: every recognized truth is relative. According to the software-engineering mystics, the whole of software development is just a kind of periodic re-engineering of the revealed systems. Thus the applicability of the mystical methodologies is limited to cases where the software engineer may rely on secondary cognition mediated by an expert. Let's realize that not every software system can be specified this way.

Even an expert had to create his system of concepts. He might have done that indirectly, again (he might have learned from another expert), but nonetheless somebody at some time had to experience the objective reality on his own. And there was no other source of information available to him than the external effects of the part of objective reality under exploration (i.e. its behavior). What's more, his own experience with the behavior of objective reality also served as verification that the acquired knowledge was relatively true. So far Marxist gnoseology and American behaviorism.

Mystics assume that the cognition of the software engineer is mediated by an expert (which may apply to the expert's own cognition as well). How about the validity and truthfulness of the mediated knowledge? The mystics simply say: "everybody does it this way, so it must be correct". This sentence says something about the pure validity of the recognized matter, but it says nothing about how far the matter is relatively true. Thus it is a relevant way of validation, but not of verification. Accepting mediated experience leads to greater uncertainty about the degree of cognition than experiencing the whole process of cognition, which involves gathering knowledge, searching for appropriate relationships among notions, constructing the system (as logically correct as possible) and verifying it.

Conversely, OBA, OOSE and the other behavioral methodologies employ merely the technology of immediate cognition. They ignore the technology of indirect, mediated cognition (learning from experts). Software behaviorists have no reason to disapprove of expert knowledge: such knowledge can be obtained neither easily nor quickly, and behavior analysis may be of no use when facing a complex problem and a short deadline (which can be met otherwise).


There are problems that require the software engineer to also be an expert in them

Exploring a nuclear power station with the methods of behavior analysis could cause an immediate environmental catastrophe. Modeling a nuclear power station according to information mediated by an expert is likewise of no use except for causing an environmental catastrophe... Whoever goes to program a steam power station must be a software engineer as well as an expert in physics, in measurement and control, in mechanical and electrical engineering etc.

Let's consider as an example how the opening and closing of valves progresses in time. The progress of opening or closing should depend on the progress of pressure in the pipes before and behind the valve and on the construction of the pipes. Leveling the pressure should progress in such a way that it doesn't take too much time and, on the other hand, that the pipes don't get damaged. It is probably impossible to find a material or a way of construction such that the pipes can endure anything, especially when the pressures and temperatures, as well as the possible rates of their change, are like those in a steam power station.

Some economists even claim of accounting that the programmer must also be an expert in accounting (with, they say, at least two years of experience). I don't believe them (and I know why).


Specification is not design

The problem domain can be modeled according to the ideas of an expert. Such a model can be used as a system specification: it specifies well how the system behaves externally. The internal structure of the model, however, probably doesn't meet the requirements of good design, since the expert is an expert in the problem domain, not in system design. Never mind: the specification of external behavior determines a class of all equivalent designs.

A system designer (or software engineer) is to construct another, well-designed model of the problem domain (e.g. the problem domain component according to OOA & OOD, or the ideal model according to OOSE). The specification then proves the validity of the construction.

Unfortunately, the mystical methodologies don't draw the conclusion that specification and construction are two different things. According to the mystical methodologies, software engineers may directly use the specification as a component when they construct the system, and they may neglect the good design of that component. In this way, the quality of the whole system may get spoiled.

The problem is, though, how to ensure that the new, well-designed system is equivalent to the original specification. Validating after the construction is done means validating too late. A technology is needed that allows constructing the equivalent system directly, so that following some procedure would ensure the equivalence. That means the designer must make each particular decision with the information that is available at that particular moment; he can't base his decisions on information that would become available only after the design is done.


Synchronization of lifecycles

OMT doesn't take lifecycles and their synchronization into account when particular objects in an object model are distinguished. All object-modeling or entity-modeling approaches that ignore object lifecycles and their correct synchronization are necessarily intuitive: they are based on vague terms like dependency. Everybody who models data or normalizes relational databases may differ from everybody else in how he understands dependency, and yet they all may feel a common agreement. What kind of methodology is it where what one holds for a dependency the other may not consider a dependency at all, and both may be right?

Let's have a look at OOSE, for example. OOSE doesn't speak about dependencies; it operates with well-defined terms like interaction (and we can go on: communication, synchronization, communication protocol, object lifecycle etc.). In this way, obvious criteria are stated that allow

  1. recognizing objects (at all)
  2. distinguishing objects (from each other)

Objects reveal themselves only through their behavior. Abstracting from object behavior makes recognizing objects impossible.


Danger of functional modeling

OMT recommends functional modeling to capture the functional aspect of the system. The functional model consists of data transformation processes, data flows and data stores. Both the stores and the flows are wholes with their own identity, memory and behavior. Processes are also wholes with their own identity, and they have behavior too. There are two kinds of processes: pure functions and sequential processes. The behavior of a pure function depends only on its input data. The behavior of a sequential process depends not only on the input data but on the previous history as well (which means that the process must remember its history, e.g. by staying in some of its internal states). Pure functions may be considered a special case of sequential processes (they always remain in the same state). Although pure functions don't need memory, sequential processes (as the more general concept) definitely may have memory. That implies: processes, flows and stores are objects.
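To make the argument concrete, here is a minimal C++ sketch (my own illustration, not taken from the book): a pure function yields the same output for the same input, while a sequential process must remember its history - and a whole with identity, memory and behavior is an object.

    #include <iostream>

    // A pure function: the output depends only on the input.
    double scale(double x) { return 2.0 * x; }

    // A sequential process: the output depends on the input AND on the
    // previous history, so it must remember state - i.e. it is an object.
    class Accumulator {
        double total = 0.0;            // the memory that a pure function lacks
    public:
        double feed(double x) { total += x; return total; }
    };

    int main() {
        Accumulator acc;
        std::cout << scale(3.0) << ' ' << scale(3.0) << '\n';       // 6 6 - no history
        std::cout << acc.feed(3.0) << ' ' << acc.feed(3.0) << '\n'; // 3 6 - history matters
    }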

The functional model duplicates both the object model and the dynamic model. The same thing that can be drawn as a function in the functional model can be drawn as an object class in the object model, too; the same applies to data flows and data stores. And how about the dynamic model? The dynamic model differs from the object model in that it expresses the sequencing of actions, which the object model does not express clearly. Nonetheless, the functional model expresses sequencing of actions as well: data must be produced before they are consumed, so every data flow (which connects a data-producing process with a data-consuming process) expresses the sequencing of those processes implicitly. Transformation from the functional model to the object model and back is possible; transformation to the dynamic model and back is also possible. The functional model is rather another way of drawing than another aspect of the same system.

Although transformations among the models are possible, they make little sense if the various models are constructed independently. It seems difficult to find out whether a functional model is equivalent to an object model or a dynamic model. Why? The various models may express various decompositions of the same system or subsystem, so there may not exist any one-to-one mapping between the models, and it is probably hard to prove whether an appropriate structural transformation exists. Thus the consistency of the system can hardly be maintained. What's more, the functional model allows creating loopbacks, i.e. introducing sequential behavior and memory using mere data flows (no stores are necessary). That kind of memory can hardly be mapped to the object model.

I don't claim that the functional model doesn't make sense. I claim it is duplicate (or redundant), so it makes the same sense as the object or the dynamic model. I see the problem in the redundancy and the resulting inconsistency of the system rather than in the functional model itself. I see as a problem what OMT admits: I may construct several mutually independent models of the same system, and I can't get any knowledge about their equivalence. If I make some change in one of those models, I will not be able to make the corresponding change in the other models. This way, I may introduce logical inconsistencies into the system being developed. I suppose a methodology should prevent constructing logically inconsistent systems, not promote it.

We can see a good solution of a similar problem: the consistency of the object and dynamic models seems to be ensured well. The dynamic model might get redundant to both the functional and the object model; however, its role is clearly defined: it should model the lifecycles of particular objects. On the contrary, the object model captures the structure of the object system (except for command structures) and abstracts from lifecycles and from the sequencing of actions. The Smalltalk believers may dislike that idea; perhaps they would like to emphasize that even the command structure is a structure of objects. As for me, I don't need to emphasize what John von Neumann discovered some fifty years ago: command structures and data structures are essentially the same, so they can be handled in the same manner. I would rather emphasize how the command structures are special (not every data structure is appropriate to hold commands). That approach allows me to better utilize the results of at least seventy years of computer science and the essential discoveries made by Alan Turing and Alonzo Church. This is not a crucial conflict, however; the matter is nothing more than what one likes more or less.

Possible solutions now appear obvious: the functional model can be entirely omitted (it is redundant), or some clear boundary should be found between the functional model and the object model and between the functional model and the dynamic model (like the boundary that exists between the dynamic model and the object model). The latter proposal avoids the redundancy. Other solutions may probably be found, too.


Attribute is not a pure value

It is not correct that an attribute is a pure value. An attribute can change its value, which is a sign of identity. James Rumbaugh, in his series [2] in JOOP, particularly in vol. 8, no. 1 (1995), distinguishes between attribute slots and attribute values. See also the particular remarks to pg. 23 and to pg. 162.

In fact, an attribute slot is the same kind of object as any other object. What does that mean? It is a whole with its own identity; it behaves and it remembers. Nevertheless, the notion of attribute concerns the association between the holder object of an attribute and the attribute slot rather than the attribute slot itself. The attribute association is a special kind of container-contents association.

The attribute holder and the slot use the association in order to communicate and synchronize with each other. The holder and the slot should follow some rules of communication, a communication protocol. That communication protocol is part of their lifecycles (and also of the lifecycle of the attribute association). There are some typical constraints set on the attribute synchronization, e.g. both the holder and the slot must be created at once and destroyed at once. An attribute is usually used privately inside its holder (so external messages are not rerouted to the attribute by its holder).
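A minimal C++ sketch of this view of attributes (the names Slot and Person are invented for illustration): the slot is an object in its own right, its lifecycle is synchronized with that of its holder, and it is used privately.

    #include <string>

    // The attribute slot: a whole with identity, memory and behavior.
    class Slot {
        std::string value;                                  // memory
    public:
        void set(const std::string& v) { value = v; }       // behavior: the value
        const std::string& get() const { return value; }    // changes, the identity doesn't
    };

    // The holder: the attribute association is a container-contents association.
    class Person {
        Slot name;    // created and destroyed together with its holder -
                      // the typical synchronization constraint on attributes
    public:
        void rename(const std::string& n) { name.set(n); }  // external messages are not
        std::string whoAmI() const { return name.get(); }   // rerouted to the slot itself
    };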


Layered structure of associations

Objects don't live in solitude; they are connected together. OMT mentions associations between two or more objects and links (instances of associations). Complex whole-part structures are called aggregations. OOSE additionally introduces communication associations as a special kind of association, and a similar kind of communication connection can be found in OOA. Although communication is just the purpose for which an association is established (so it is a regular association), the mere mention of communication implies a temporal point of view.

Every link (in terms of OMT) is an object and has its memory as well as behavior and a lifecycle. The parties (all the objects connected by the link) must be known when the link is established and remembered for the whole life of the link. The behavior of a link is constituted by the communication itself and is defined in the respective association; that definition can be called a communication protocol.
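A minimal C++ sketch of a link as a regular object (invented names, deliberately trivial protocol): the parties are bound when the link is established and remembered for its whole life, and its behavior is the communication itself.

    #include <iostream>
    #include <string>

    class Party {                       // any object that can be connected
        std::string id;
    public:
        explicit Party(const std::string& n) : id(n) {}
        void receive(const std::string& msg) { std::cout << id << " got: " << msg << '\n'; }
        const std::string& name() const { return id; }
    };

    class Link {
        Party& a;                       // memory: the parties, bound at construction
        Party& b;                       // and kept for the whole life of the link
    public:
        Link(Party& x, Party& y) : a(x), b(y) {}
        // Behavior: here the protocol is merely "either party may send to the
        // other"; a real association would define (and enforce) a richer one.
        void send(Party& from, const std::string& msg) {
            (&from == &a ? b : a).receive(from.name() + ": " + msg);
        }
    };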

Unfortunately, neither OMT nor any other methodology I know explains how the communication protocol should be designed correctly. Simplifying the methodology and abstracting from communication protocols is dangerous: although one designs the system according to a methodology, the objects may still be unable to communicate correctly. The incorrect communication may be unable to transfer the desired information, it may produce nonsense data, or it may cause deadlocks.

In systems consisting of several dozen object classes or more and living a dynamically rich life (instances are created and destroyed often), direct steady links from instance to instance are definitely not a sufficient means of communication. Complex, compound, long-distance temporary connections must be created, run and destroyed dynamically. The system of objects is a regular communication network, indeed. Considering just the direct connections from object to object and their communication protocols covers only the link layer of the network. But the network, transport, session, presentation and application layers, with all the respective protocols, should be designed in the system as well (in accord with the ISO/OSI network architecture, or with analogous layers and protocols of another network architecture). Otherwise, although the link or even the network communication protocol may be designed correctly, the system may not work well because it may crash e.g. on the session layer.


OMT: Further evolution


OMT has recently incorporated use cases. RSE 2/94 [6] reported on Frederick Eddy's lecture at OWG: "The extended methodology combines use cases with enhanced notations for scenarios and event trace diagrams to capture required behavior and drive the construction of a system model incorporating object, dynamic and functional views. During the System Design phase, mappings of the analysis model onto several candidate system architecture are evaluated so that the best implementation can be selected. This technique facilitates the matching of object-oriented systems to client-server or other distributed and parallel architectures."

The new development of second-generation OMT has stated some constraints on functional modeling. One of them is that only pure functions and pure values are allowed in Object-Oriented Data Flow Diagrams (OODFDs). This makes the functional model similar to a logical scheme: any dynamics (and thus sequencing) is involved only implicitly in the data flows (data must be produced prior to being consumed, of course). The author considers iterations and conditionals to be forms of control and recommends avoiding them. I don't see any reason for avoiding conditionals, since they are pure functions (e.g. cond is a regular LISP function). Iterations can be avoided easily, since any iteration can be transformed into recursion (as in LISP) or into a loopback (as in a logical scheme), and neither of those is disallowed in OODFDs. Or should some additional constraints be stated on OODFDs? Anyway, the problems mentioned in this review are still not solved.
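To illustrate in C++ (my sketch, not from the article series; note that C++ evaluates arguments eagerly, unlike LISP's cond, which is a special form with lazily evaluated clauses, so a recursive call must not be passed through a conditional function):

    // A conditional as a pure function: it merely maps a test and two values
    // to a result; no state, no side effects.
    int cond(bool test, int thenValue, int elseValue) {
        return test ? thenValue : elseValue;
    }

    int absolute(int x) { return cond(x < 0, -x, x); }  // safe: both branches may be evaluated

    // Iteration eliminated in favor of recursion: the explicit loop ...
    int sumIterative(int n) {
        int total = 0;
        for (int i = 1; i <= n; ++i) total += i;        // explicit control
        return total;
    }

    // ... disappears, and the sequencing follows from data dependency alone.
    int sumRecursive(int n) {
        return n == 0 ? 0 : n + sumRecursive(n - 1);
    }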


OMT: Summary


There are several methodologies that can be compared with OMT (OOSE, OOA & OOD, OBA and OLC; see the glossary and the references).

OMT contains much useful advice and many guidelines for software designers and programmers (or implementors). Obviously, there is much experience in the background of the book, and any experience needs time and effort to be acquired. It is much easier to learn from other people (i.e. from the authors of OMT) than to gain one's own experience (and pay for one's own mistakes).

The book is extraordinarily sound in dynamic modeling. Nevertheless, the underlying theory of software engineering and computer science is not explained there, so practitioners with a weak theoretical background may not understand some points (e.g. race conditions and their proper handling).

Although extensive parts of the book are very well explained and contain much useful information (still, five years after publication!), other (and unfortunately essential) places are at best vague. This applies particularly to the concepts of the object-oriented approach, to system specifications and to object modeling. Additionally, the very applicability of functional modeling is controversial. Obviously, it is easy to say this today; who knew it five years ago? Nonetheless, whoever reads the book today needs this information.

The reader should be well acquainted with computer science (especially with the theory of abstract data types, the theory of automata and the theory of concurrent processes) before he or she starts reading Object-Oriented Modeling and Design. The vague and inaccurate places in the book may cause a feeling of misunderstanding or disagreement and may provoke fruitful rethinking and discussions. Conversely, a feeling of common understanding signals misunderstanding, indeed!

Object-Oriented Modeling and Design is still definitely worth reading, though with a proper theoretical background and some criticism.


OMT: Glossary of Abbreviations


CACM
Communications of the Association for Computing Machinery (journal published by ACM)
CAD
Computer Aided Design
JOOP
Journal of Object-Oriented Programming (published by SIGS Publications Inc.)
OBA
Object Behavior Analysis (see [7])
OLC
Object lifecycles (see [8])
OMT
Object Modeling Technique (see the book being reviewed)
OO
Object-Oriented
OOA, OOD
Object Oriented Analysis, Object Oriented Design (see [4])
OOSE
Object-Oriented Software Engineering (see [3])
SQL
Structured Query Language


OMT: References


  1. Rumbaugh, J.: Using use cases to capture requirements, JOOP (7)5, 1994
  2. Rumbaugh, J.: series of articles on second-generation OMT, JOOP (7) and (8), 1994-1995
  3. Jacobson, I. et al.: Object-Oriented Software Engineering - A Use Case Driven Approach, Addison-Wesley, Wokingham 1992
  4. Coad, P.; Yourdon, E.: Object-Oriented Analysis, Prentice-Hall, Englewood Cliffs 1990
  5. Rumbaugh, J.: The OMT Method (said to be a new book on second-generation OMT)
  6. Autumn '94 in Frankfurt: Object World Germany (conference programme), RSE 2/94, Ivan Ryant, Prague 1995
  7. Goldberg, A.; Rubin, K.: Object Behavior Analysis, CACM (35)9:48-62, 1992
  8. Shlaer, S.; Mellor, S.: Object Lifecycles: Modeling the World in States, Prentice-Hall, Englewood Cliffs 1992


OMT: Particular Remarks


Parts: 1, 2, 3, 4, Glossary & Index


pg. ix (Preface): software development based on modeling objects from the real world
- what is the real world?
pg. 1 (Introduction): Object oriented modeling and design is a new way of thinking about problems using models organized around real-world concepts.
- what does a real-world concept mean?
pg. 1 (Introduction): ...objects incorporate data structure and behavior
- why data structure and not memory? Distributing memory among attributes and behavior among methods is an implementation issue (if it is necessary at all), not an essential feature of an object!
pg. 4 (Introduction): abstractions exist in the real world
- this is mysticism (the authors suppose that some hidden truths exist objectively, not subjectively in the minds of particular people)

PART 1

pg. 5: Starting of the statement of the problem...
- it's too late! Requirements gathering should be part of analysis. What Rumbaugh et al. consider analysis is the construction of the ideal model (in terms of OOSE).
pg. 5: The analyst must work with the requestor to understand the problem because problem statements are rarely complete or correct.
- DEFINITELY! All the more so because no customer believes that he MUST participate in the analysis!
pg. 23: An attribute should be a pure data value, not an object. Unlike objects, pure data values do not have identity.
But an attribute is a whole and can be distinguished from other attributes: it has identity inside its owner object. An attribute can change its value: it has behavior and memory. An attribute is an object because it has the same features as any other object. (See also pg. 162.)
pg. 25: When an operation has methods on several classes, it is important that the methods all have the same signature - the number and types of arguments and the type of result value.
- operation overloading is not considered (unlike in C++ or Smalltalk).
pg. 27:
My summary: links connect object instances; a link is an instance of an association. Associations are inherently bi-directional (unlike in OOSE) and may connect more than two classes. A link may have its own attributes, since it is a regular object (pg. 31). An association may be qualified with a special attribute; the qualifier brings some more specific information and reduces the multiplicity of the association.
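A qualified association maps naturally to a dictionary keyed by the qualifier. A C++ sketch of one possible implementation (invented names; a directory qualifying its files by name, so the qualifier reduces the multiplicity to at most one file per name):

    #include <map>
    #include <string>

    class File { /* ... */ };

    class Directory {
        std::map<std::string, File*> files;     // the qualifier as the map key
    public:
        void add(const std::string& name, File* f) { files[name] = f; }
        File* find(const std::string& name) const {   // a name selects at most one file
            auto it = files.find(name);
            return it == files.end() ? nullptr : it->second;
        }
    };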
pg. 63: Specialization is considered not only as extension but also as restriction. A subclass may constrain ancestor attributes; for example, a circle is an ellipse whose major and minor axes are equal. Because of this, operations must be redefined or blocked.
This feature is not usually considered in other OO analysis and design methodologies, nor is it supported in programming languages (see also pp. 87 and 111).
pg. 78:
useful notions: metadata, constraints, homomorphism.
pg. 87: A state is an abstraction of the attribute values and links of an object. Sets of values are grouped together into a state according to properties that affect the gross behavior of the object.
- a restriction of attribute values is imposed (similarly to constraints or restrictions in the object model) - O.K. (see also pg. 111). Mentioning not only attributes but also links is O.K. Speaking about values is not accurate, though; an inductive condition is more appropriate (a kind of predicate that relates attributes). Compare with pg. 239: "The location of control within a program implicitly defines the program state." (the location, not the attribute value!) For more on inductive conditions and program states see e.g. Edsger Dijkstra: The Science of Computer Programming or Zohar Manna: Mathematical Theory of Computation.
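A small C++ illustration (invented names) of a state expressed as an inductive condition, i.e. a predicate relating attributes rather than an enumeration of particular values:

    class Account {
        double balance = 0.0;
        double creditLimit = 1000.0;
    public:
        void post(double amount) { balance += amount; }
        // States as inductive conditions: predicates over the attributes.
        bool overdrawn() const { return balance < 0.0; }
        bool frozen() const { return balance < -creditLimit; }
    };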
pg. 91-92:
Conditions (guards on transitions) and activities (which take some time in a state) are useful notions that allow decomposition of a sequential process. Unfortunately, the book contains no guidelines for the correct composition and decomposition of processes.
pg. 99: Concurrency within the state of a single object arises when the object can be partitioned into subsets of attributes or links, each of which has its own subdiagram.
- this is not exact: partitioning the sets of values is enough (the state space has to be partitioned, not the components that actually bear the values).
pg. 111: Inherent differences among objects are therefore properly modeled as different classes, while temporary differences are properly modeled as different states of the same class.
- interesting!
pg. 112: Beware of unwanted race conditions...
- a race condition is not the only source of synchronization faults (another source: a cyclic message among several processes). The matter of race conditions is not explained in the book, nor are techniques of avoiding them.
pg. 138: Data stores are passive objects that respond to queries and updates, so the dynamic model of the data store is irrelevant to its behavior.
- but what about synchronizing the processes that access the shared store? Where else is the right place for the mutual exclusion algorithm? For example, locking of records is implemented in the databases that manage the records, not in the applications that share them.
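A minimal C++ sketch of where the mutual exclusion belongs (invented names; std::mutex stands in for whatever locking a real store would use): the store owns the algorithm, so the concurrent client processes need not synchronize themselves.

    #include <map>
    #include <mutex>
    #include <string>

    class RecordStore {
        std::map<int, std::string> records;
        std::mutex guard;                              // locking lives in the store ...
    public:
        void update(int key, const std::string& value) {
            std::lock_guard<std::mutex> lock(guard);   // ... not in the applications
            records[key] = value;
        }
        std::string query(int key) {
            std::lock_guard<std::mutex> lock(guard);
            auto it = records.find(key);
            return it == records.end() ? std::string() : it->second;
        }
    };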

PART 2

pg. 150: A user manual for the desired system is a good problem statement
- it specifies the external behavior of the system. VERY WELL!
pg. 150, 151, 260: requestor, client
- the book speaks about the customer (as requestor) but assumes the requestor to be also an expert in the problem domain (he should understand the analysis model, which is based on problem-domain notions and the relationships among them). Customer and expert are two different roles (yet another role is the user). Compare with pg. 187: "First write an initial statement, in consultation with requestors, users and domain experts."
pg. 152: The object model precedes the dynamic model and the functional model because static structure is usually better defined, less dependent on application details, more stable as the solution evolves, and easier for humans to understand.
- that is true insofar as the object model abstracts from the sequencing of behavior. The static structure of objects may be modeled according to expert knowledge and understanding of the notions in the problem domain. The other approach is based on the fact that the only limits claimed by the system specification concern just the external behavior of the system; every system with the specified behavior is a correct implementation. The object structure may also be derived from the specification of behavior. Anyway, objects are recognized by their behavior and by the differences in their lifecycles (every object must be constructed at once as a whole and destroyed at once as a whole). Although there are other criteria too, none of them substitutes for the criterion of different lifecycles, which shouldn't be neglected.
pg. 162: If the independent existence of an entity is important, rather than just its value, then it is an object. ...The distinction often depends on the application.
- a vague distinction! How about a pure value? (see pg. 23) It's clear here that an attribute has its own identity; just its existence depends on (is synchronized with) its owner.
pg. 163: You may discover inheritance from the bottom up...
Or from the top down, too. Top-down discovery of inheritance may prevent repeated discovery of the same attributes and save a significant amount of work this way...
pg. 173-179 (8.5.4 Building a state diagram):
Perfectly explained! E.g.: "The hardest thing is deciding at which state an alternate path rejoins the existing diagram. Two paths join at a state if the object 'forgets' which one was taken." - exact and well comprehensible.
pg. 261: Develop a state diagram for each class that has important dynamic behavior.
Although this is correct, allowing several concurrent processes within each lifecycle may decrease complexity significantly.
pg. 179: The functional model shows how values are computed, without regard for sequencing, decisions, or object structure.
Prior to using any value, the value must be generated; this induces an inherent sequencing in the functional model. The functional model involves data stores, and it allows loopbacks, which represent memory, which in turn implies sequential behavior. The introductory statement (quoted here) about functional modeling is therefore not true. This mistake raises doubts about the correctness of functional modeling in combination with the other two models, the more so as the whole system is modeled (not only the particular transitions in the dynamic model).
pg. 207: Handling global resources.
A nice overview of various global resources (e.g. mouse buttons are recognized as a space resource) and of various techniques of controlling access to global resources. I missed only an explicit warning that the danger of synchronization faults applies here. Unfortunately, the matter of synchronization faults is not explained in the book, nor are ways of avoiding them.
pg. 216 (9.10.5 Real Time System): Real-time design is complex and involves issues such as interrupt handling, prioritization of tasks, and coordinating of multiple CPUs.
Some real-time systems are, of course, complex; some are not. Interrupt handling is a kind of event handling, indeed. Concurrency (including prioritization and coordination of processes) is natural in OO design. Although the book doesn't cover real-time design completely, OMT probably may be applied well to common real-time systems.

PART 3

pg. 278: Writing code is an extension of the design process.
Why writing and why code? The programmer usually needn't write code from scratch; example solutions can be copied and changed (or previous solutions of a similar problem made by the same programmer). Another issue is that development environments are equipped with various designers, resource toolkits, painters and wizards that can generate code automatically. The question is how much of the design work can be done with the help of those tools instead of employing expensive CASE systems that generate poor code (or having the programmer implement the design manually).
pg. 279: Support of concurrent threads of control is lacking in most major languages (except for Ada)... Concurrency can also be simulated within programs using coroutines...
What major languages support coroutines? I know only of PDP-11 assembler, Modula-2 and perhaps Edison. By the way, the word simulated is rather confusing, because coroutines implement concurrency regularly; the single processor is shared by the processes like any other shared resource.
pg. 286: Robustness
Only robustness of the implementation is discussed, not robustness of the design. It's true that the program should be well behaved even in faulty situations. Nevertheless, robust design is another important issue and, unfortunately, it is not mentioned in the book (in contrast to e.g. OOSE).
pg. 287: Don't optimize a program until you get it working.
- well worth remembering
pg. 312-318: Implementing Associations
The book explains in detail how to implement bi-directional associations. The problem of violating the encapsulation of classes is discussed. This is valuable in contrast to other methodologies; for example, OOSE assumes unidirectional associations to be elementary, and the construction of bi-directional associations is not discussed in detail anywhere.
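A C++ sketch of one way to keep a bi-directional association consistent (invented names; the friend declaration makes visible the encapsulation compromise the book discusses, and both directions are updated in a single place):

    #include <algorithm>
    #include <vector>

    class Employee;

    class Department {
        std::vector<Employee*> members;
    public:
        void hire(Employee& e);
    };

    class Employee {
        Department* dept = nullptr;
        friend class Department;        // lets the association be maintained in one place
    };

    void Department::hire(Employee& e) {
        if (e.dept == this) return;
        if (e.dept) {                   // unlink from the previous department first
            std::vector<Employee*>& v = e.dept->members;
            v.erase(std::remove(v.begin(), v.end(), &e), v.end());
        }
        members.push_back(&e);          // then update both directions together
        e.dept = this;
    }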
pg. 319: Use of an OO language with a mature class library often results in a code that runs faster than that written with a non-OO language.
Maybe. Another source of inefficiency is the amount of code generated by OO compilers. Library classes include other classes recursively, so much dead code is added to the program this way, and not every linker removes it. Still more superfluous code is linked to the program because the library classes must be consistent and complete with regard to every possible usage; on the other hand, their full functionality is rarely exploited entirely. Thus the program contains load, store or call instructions that are never executed, yet the referenced modules must be linked to the program. This kind of dead code can't be removed by the linker, only by the compiler (e.g. employing conditional compilation), so the source code has to be recompiled. A complete recompilation may take as much as several hours or days, compared with the seconds or minutes of linking. Class templates combined with incremental compilation may provide an acceptable solution. However, dead code removal can hardly produce shorter programs than writing all the code from scratch (which is the other, undesirable extreme).
pg. 319: ...dynamic binding can be reduced to a single hash table look-up...
- at compile time, not at run time. Every virtual method can be referenced by an item in the virtual method table. The position of the item in the table is known at compile time; it needn't be found at run time, and no look-up is necessary.
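To spell the mechanism out, here is a virtual method table written by hand in C++ (a sketch of what compilers typically generate, not of any particular compiler): the slot of each method is fixed at compile time, so the dynamic call is a single indexed load, with no search and no hashing.

    #include <cstdio>

    struct Shape;
    typedef double (*AreaFn)(const Shape*);

    struct VTable { AreaFn area; };         // slot 0: area(); the index is
                                            // known at compile time
    struct Shape { const VTable* vt; double w, h; };

    double rectArea(const Shape* s) { return s->w * s->h; }
    double triArea(const Shape* s)  { return 0.5 * s->w * s->h; }

    const VTable rectVT = { rectArea };
    const VTable triVT  = { triArea };

    int main() {
        Shape r = { &rectVT, 3, 4 };
        Shape t = { &triVT, 3, 4 };
        // "Dynamic binding": fetch the table, index the slot, call.
        std::printf("%g %g\n", r.vt->area(&r), t.vt->area(&t));   // 12 6
    }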
pg. 341: The main obstacles to a straightforward mapping come from Ada's rigid typing system and lack of procedure pointers.
There are no essential obstacles to a straightforward mapping in Ada. Instead of tricky retyping and referencing methods with pointers, the regular mechanisms of Ada can be used alternatively: classes can be mapped to task types and messages to rendezvous. Nonetheless, I am afraid that the amount of dynamically allocated memory may grow excessively. The reason is that when a new task is created, it usually allocates hundreds of bytes of free space for its stack; this way, free memory gets filled with kilobytes or megabytes of empty stacks. That is not necessary in principle: if objects are allocated dynamically (and the number of objects is potentially unlimited), the system of objects has the power of a Turing machine, and providing a Turing machine with additional stacks is redundant.
pg. 343: Translating Classes Into Fortran Arrays
- a smart use of arrays in common blocks! Let's notice that every name must be unique within a program unit, so variables from common blocks may have to be renamed in particular subroutines or functions. Some dirty-trick alternatives to the mentioned mechanism can probably be found. For example, program units must not call each other recursively (by the definition of Fortran). While many Fortran implementations allow recursive calls, other implementations do not; in the latter case, variables in program units can be allocated statically and persist from invocation to invocation. If this is ensured in the particular Fortran implementation, the variables can be used to save attributes, and the subroutine can dispatch messages. This way, attributes are kept quite private (but the program is not portable).
pg. 349: Implementing Inheritance in Ada
- implementation with variant records is demonstrated. Implementation with tasks is less awkward and more obvious: every descendant instance must create its ancestor instance. The descendant instance then registers itself with the ancestor (the task access to the descendant is remembered in a local variable). The task access to the ancestor is also remembered in the descendant instance, so a bi-directional association is established. This way, the ancestor can invoke the behavior which the descendant redefines; conversely, the descendant can invoke the inherited behavior in the ancestor.
pg. 351: Implementing method resolution
The classical message-dispatching scheme is completely ignored. A message can be implemented as any other object, not just as a procedure call. Messages can then be dispatched by event handlers that provide full polymorphism (including run-time method resolution). This approach to implementation is classical in the C language, standardized in CORBA and natural in Ada (where the rendezvous mechanism is perhaps more efficient than passing message objects in C).
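A minimal C++ sketch of such a dispatching scheme (invented names): the message is a regular object, and an event handler resolves it at run time, providing full polymorphism.

    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>

    struct Message {                 // a message as a first-class object
        std::string selector;        // which operation is requested
        int argument;                // payload, kept trivial here
    };

    class Handler {
        std::map<std::string, std::function<void(int)>> methods;
    public:
        void bind(const std::string& sel, std::function<void(int)> m) {
            methods[sel] = m;
        }
        void dispatch(const Message& msg) {          // run-time method resolution
            auto it = methods.find(msg.selector);
            if (it != methods.end()) it->second(msg.argument);
            else std::cout << "message not understood: " << msg.selector << '\n';
        }
    };

    int main() {
        Handler account;
        int balance = 0;
        account.bind("deposit", [&](int x) { balance += x; });
        account.dispatch(Message{"deposit", 100});
        account.dispatch(Message{"withdraw", 50});   // rejected gracefully
        std::cout << balance << '\n';                // 100
    }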
pg. 375, 386: Each class maps to one or more tables.
Some classes don't map to tables but to forms, procedures, queries and other application objects. Every class has behavior, and the behavior of classes resides in applications. Not every class saves its state in persistent memory (i.e. in tables).
pg. 387:
Mapping individual attributes to tuples is discussed. As far as I know, this approach to the implementation of an object model is not mentioned in other books. Although it cannot be recommended for manual implementation, it might be useful for development environments or CASE systems. Very interesting anyway.

PART 4

pg. 402: There is no significant dynamic model for compiler, since it is a batch transformation. And four lines farther on the book reads: This pass required little analysis effort beyond figuring out a correct BNF syntax for the graphics editor language.
Every language is accepted by some sequential machine (e.g. a finite-state machine or a Turing machine); this is the reason why a dynamic model is relevant in this case. Although interactive processing probably indicates the need for a dynamic model, batch processing is not a strong enough reason for omitting it. By the way, BNF has the expressive power of context-free languages, which cannot be accepted by a finite-state machine; and a finite-state machine is what state-transition diagrams usually describe (unless activities in states create recursion).
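A small C++ illustration of the point about expressive power (my sketch): the toy grammar expr ::= '(' expr ')' | 'x' is context-free, and the recursive call supplies the unbounded stack that a finite-state machine lacks.

    #include <cstddef>
    #include <string>

    // Recursive-descent acceptor for: expr ::= '(' expr ')' | 'x'
    bool expr(const std::string& s, std::size_t& i) {
        if (i < s.size() && s[i] == 'x') { ++i; return true; }
        if (i < s.size() && s[i] == '(') {
            ++i;
            if (!expr(s, i)) return false;            // the recursion IS the stack
            if (i < s.size() && s[i] == ')') { ++i; return true; }
        }
        return false;
    }

    bool accepts(const std::string& s) {
        std::size_t i = 0;
        return expr(s, i) && i == s.size();
    }
    // accepts("((x))") == true, accepts("((x)") == false:
    // matching the nesting depth needs counting beyond any fixed number of states.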
pg. 399-404, fig. 18.2, 18.6, 18.8 (functional models):
What objects or methods in the object models are related to the operations in the functional models? I cannot find any...
pg. 403-411, fig. 18.7, 18.9-18.13 (object models):
I don't see any methods. There must be something wrong where objects hold only data structures and no behavior.
pg. 416-432: Example of simulation software
- excellently illustrates the essential issues of the OO approach and of the methodology. It is a classical example (everybody should read it!), since object orientation comes straight from the world of discrete simulation (the Simula-67 language, 1967). A remarkable thing is that the example is more than ten years old by now (1984).
pg. 441: We culled the scenarios for atomic operations... We constructed dynamic models for each operation...
- Why for atomic operations? Why not for combined scenarios? (That contradicts the guidelines in 8.5.4.)
pg. 445-447: State tree control
The hierarchy of states is displayed in a state-tree diagram, a good idea that I hadn't known before. It has much in common with Jackson's structure diagrams, which also display the states (and the transitions as well) of a program in a structured manner.

GLOSSARY, INDEX

pg. 454 (Glossary):
Definitions in the glossary of terms are short and inaccurate.
pg. 491 (Index):
The index seems pretty detailed.